Camera Testing Tool 2.0
Redesigning how Microsoft's device teams validate camera quality —
from fragmented spreadsheets to one decision-ready platform.
Microsoft's device program validates camera performance for every Windows-certified device before it ships — across 15+ vendors and 80+ quality metrics per device. Camera Testing Tool 2.0 is the platform I redesigned to make that validation process faster, clearer, and more consistent.
Producing a vendor report required pulling results from separate screens and pasting them into an external document — hours wasted, inconsistency introduced every cycle.
Interpreting results required constantly switching to an external reference sheet to look up thresholds — slowing analysis and making it easy to miss boundary failures.
Every cycle meant jumping between tools that were never meant to work together — uploading device data in one place, looking up thresholds in a separate document, cross-checking per-device reports, then hand-compiling everything into a vendor deliverable.
Three methods — stakeholder interview, live workflow observation, and a legacy tool audit — all pointed to the same root cause: the tool made information available, but never helped testers act on it.
N=1 interview participant — every qualitative finding was cross-validated against workflow observation or the tool audit before informing a design decision.
Testers lived between tools, not inside one
Reviewing results across multiple devices required switching between separate per-device reports — no unified view existed.
Numbers without context slow decisions
Pass/fail thresholds weren't visible in the results view. Testers had to leave the tool to interpret every metric value.
Every answer required a workaround
Teams compensated for tool gaps with manual steps — comparing metrics in spreadsheets, summarising results by hand for stakeholders.
No screen showed the full picture
There was no consolidated view of device health across a vendor set — understanding overall quality required assembling information manually.
Every finding pointed to the same gap: the tool surfaced data but didn't structure it around decisions. Three design goals shaped every decision that followed, from information architecture down to individual components.
Critical failures and device health visible before any drill-down. Decision-critical signals always above the fold — never buried in rows.
Evaluate multiple devices in a single view — no tab-switching, no manual assembly. The comparison is the default state, not an extra step.
Threshold guidance lives inside the results view. Testers interpret and decide without leaving the screen — no external reference needed.
The IA was rebuilt around four sequential screens — each owning exactly one moment in the workflow. Every design choice below maps directly to a validated research finding.
| Finding | Testers configured validation runs device-by-device — repeating the same setup steps for every hardware variant in a cycle, introducing inconsistency and wasting time. |
| --- | --- |
| Decision | Replaced single-device upload with a unified batch import flow. All devices are configured together in one session, with shared settings applied across the set. |
| Outcome | Consistent run configurations across devices. Reduced setup friction and eliminated a class of errors caused by per-device manual entry. |
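To make the decision concrete, here is a minimal sketch of what a shared batch configuration could look like when expanded into identical per-device runs. The field names (metricProfile, thresholdSet, sceneList) are illustrative assumptions, not the tool's actual schema.

```typescript
// Minimal sketch: one batch run configuration applied across a device set,
// replacing per-device manual setup. Field names are illustrative assumptions.

interface SharedRunSettings {
  metricProfile: string;   // e.g. a certification baseline profile
  thresholdSet: string;    // which threshold revision to validate against
  sceneList: string[];     // capture scenes included in the run
}

interface BatchRunConfig {
  vendor: string;
  devices: string[];       // every hardware variant in the cycle
  shared: SharedRunSettings;
}

// Expanding the batch into per-device runs guarantees identical settings,
// which is exactly the class of error the per-device flow kept reintroducing.
function expandBatch(config: BatchRunConfig) {
  return config.devices.map((device) => ({
    vendor: config.vendor,
    device,
    ...config.shared,
  }));
}
```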
| Finding | Testers couldn't interpret metric values without knowing the threshold — so they constantly switched to an external reference sheet, breaking focus and slowing analysis. |
| --- | --- |
| Decision | Redesigned the results view to embed threshold context, pass/fail indicators, and boundary proximity directly alongside every metric value. |
| Outcome | Testers can read and interpret results without leaving the screen. Faster analysis and fewer missed failures at the boundary. |
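A minimal sketch of the evaluation logic behind that in-context display: each value is compared against its threshold, and results that pass but sit close to the boundary are flagged as at-risk. The metric shape, the min/max comparison model, and the at-risk band are illustrative assumptions, not the shipped implementation. Surfacing the margin alongside the raw value is what lets testers catch boundary failures without opening a reference sheet.

```typescript
// Minimal sketch: deriving the in-context status shown next to each metric.
// The threshold shape and the "at-risk" band are illustrative assumptions.

type Comparison = "min" | "max"; // metric must stay above a min or below a max

interface Threshold {
  value: number;
  comparison: Comparison;
  atRiskBand: number; // fraction of the threshold treated as "near the boundary"
}

type Status = "pass" | "at-risk" | "fail";

interface MetricReading {
  metric: string;   // e.g. "sharpness_center" (hypothetical name)
  device: string;
  value: number;
}

interface EvaluatedMetric extends MetricReading {
  threshold: Threshold;
  status: Status;
  marginToThreshold: number; // signed distance to the pass/fail boundary
}

function evaluate(reading: MetricReading, threshold: Threshold): EvaluatedMetric {
  const { value, comparison, atRiskBand } = threshold;
  // Positive margin means the reading is on the passing side of the boundary.
  const margin =
    comparison === "min" ? reading.value - value : value - reading.value;

  let status: Status;
  if (margin < 0) {
    status = "fail";
  } else if (margin <= Math.abs(value) * atRiskBand) {
    status = "at-risk"; // passing, but close enough to flag for review
  } else {
    status = "pass";
  }

  return { ...reading, threshold, status, marginToThreshold: margin };
}
```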
| Finding | Comparing metric performance across devices required opening separate per-device reports — a fragmented experience that made spotting patterns across a vendor set nearly impossible. |
| --- | --- |
| Decision | Introduced a unified comparison layer with tabbed metric categories and device toggles, enabling side-by-side analysis without switching screens. |
| Outcome | Reduced time-to-insight on cross-device analysis. Decision-critical signals visible at a glance, not buried in sequential reports. |
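As a sketch of the underlying structure, the comparison layer can be thought of as a pivot from a flat list of per-device results into a category → metric → device grid: one tab per category, one row per metric, one column per toggled-on device. The grouping below is illustrative, not the actual data model.

```typescript
// Minimal sketch: pivoting per-device results into the metric x device grid
// that the comparison view renders. Field names are illustrative assumptions.

interface EvaluatedMetric {
  metric: string;
  category: string;              // e.g. "Exposure", "Color", "Noise"
  device: string;
  value: number;
  status: "pass" | "at-risk" | "fail";
}

// category -> metric -> device -> result
type ComparisonGrid = Map<string, Map<string, Map<string, EvaluatedMetric>>>;

function buildComparisonGrid(results: EvaluatedMetric[]): ComparisonGrid {
  const grid: ComparisonGrid = new Map();
  for (const r of results) {
    const byMetric =
      grid.get(r.category) ?? new Map<string, Map<string, EvaluatedMetric>>();
    const byDevice = byMetric.get(r.metric) ?? new Map<string, EvaluatedMetric>();
    byDevice.set(r.device, r);
    byMetric.set(r.metric, byDevice);
    grid.set(r.category, byMetric);
  }
  return grid;
}
```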
| Finding | After analysis, PMs manually assembled validation results into a vendor deliverable — copying data out of the tool into a separate document, a process that took hours and varied in format. |
| --- | --- |
| Decision | Designed a final reporting layer that transforms validated results directly into a structured, vendor-ready summary. One action — no manual compilation. |
| Outcome | Report generation goes from hours to minutes. Consistent format across all vendor deliverables, regardless of who creates them. |
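A minimal sketch of that one-action transformation, assuming a simple summary shape: evaluated results collapse into per-device counts plus the failed metrics a vendor needs to act on. The report fields are placeholders, not the deliverable's real format.

```typescript
// Minimal sketch: collapsing evaluated results into a vendor-ready summary
// in one pass. The summary shape is an assumption for illustration.

interface EvaluatedMetric {
  metric: string;
  device: string;
  status: "pass" | "at-risk" | "fail";
}

interface DeviceSummary {
  device: string;
  passed: number;
  atRisk: number;
  failed: number;
  failedMetrics: string[]; // called out explicitly in the vendor deliverable
}

function buildVendorReport(vendor: string, results: EvaluatedMetric[]) {
  const byDevice = new Map<string, DeviceSummary>();
  for (const r of results) {
    const row: DeviceSummary = byDevice.get(r.device) ?? {
      device: r.device,
      passed: 0,
      atRisk: 0,
      failed: 0,
      failedMetrics: [],
    };
    if (r.status === "pass") row.passed++;
    else if (r.status === "at-risk") row.atRisk++;
    else {
      row.failed++;
      row.failedMetrics.push(r.metric);
    }
    byDevice.set(r.device, row);
  }
  return {
    vendor,
    generatedAt: new Date().toISOString(),
    devices: [...byDevice.values()],
  };
}
```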
Each flow traces a distinct user goal through the platform — with branching decision logic made explicit at every critical step.
A validation interface displays dozens of metrics per screen. Every token and component was chosen to reduce visual noise — not for aesthetics. Color encodes status (pass / fail / at-risk) so testers read results at a glance. The typography scale ensures dense data tables stay scannable without feeling cluttered.
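As an illustration of the token idea, each validation state maps to one semantic color pair, and a compact type scale keeps dense tables readable. The values below are placeholders, not the actual design-system tokens.

```typescript
// Minimal sketch of status-driven tokens: one semantic token per validation
// state, so tables encode status by color rather than extra text. The hex
// values and scale steps are placeholders, not the real design-system values.

const statusTokens = {
  pass:   { fg: "#1B7F4D", bg: "#E7F6EE" },
  atRisk: { fg: "#8A6200", bg: "#FFF4D6" },
  fail:   { fg: "#B3261E", bg: "#FCEAE8" },
} as const;

// Compact type scale tuned for dense data tables: small steps, tight leading.
const typeScale = {
  tableCell:    { size: 13, lineHeight: 18 },
  tableHeader:  { size: 12, lineHeight: 16 },
  sectionTitle: { size: 16, lineHeight: 22 },
} as const;

type Status = keyof typeof statusTokens; // "pass" | "atRisk" | "fail"
```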
The logo references the optical core of the product — a camera system — while abstracting it into a structured, geometric mark. It communicates measurement and validation, not photography. The form suggests a lens aperture or viewfinder grid: elements that signal controlled observation and systematic evaluation.
The geometric construction avoids decorative curves in favor of structured, grid-anchored shapes. This communicates technical accuracy and a system-driven process — the tool evaluates, measures, and reports. Clean geometry over expressive form signals that data integrity comes first.
This is not a consumer product. The logo reflects an internal, professional-grade validation system used by QA engineers and product teams — not end users. The mark is restrained, functional, and built to sit inside dashboards and reports without demanding attention.
Metrics estimated from workflow observation and PM review. Formal usability testing planned for Q2 2027.
Kochakorn demonstrated strong system thinking and quickly understood complex technical constraints, translating them into a clear, structured interface that improved metric visibility and usability. Her ability to simplify dense data views while maintaining accuracy was notable — as was her approach combining product mindset with execution discipline throughout the project.
In enterprise validation tools, every extra click scales into real errors. Reducing cognitive load isn't a preference — it's the most critical thing the interface can do.
N=1 access forced every insight to be cross-validated before informing a decision. The constraint raised the bar — findings had to earn their way in, not just appear.